Abstract:
In this paper, the classification accuracy of galaxy images is demonstrated to improve when the images are first enhanced. Galaxy images often contain faint regions whose intensity is similar to that of stars and the image background, resulting in data loss during background subtraction and galaxy segmentation. Enhancement darkens these faint regions, enabling them to be distinguished from other objects in the image and from the background, relative to their original intensities. The heap transform is employed for enhancement. Segmentation then produces a galaxy image that closely resembles the structure of the original and is suitable for further processing and classification. After a preprocessing stage, morphological feature descriptors are applied to the segmented images to extract the galaxy structure used to train the classifier. A support vector machine performs training and validation on both the original and the enhanced data, and the classification accuracies of the two data sets are compared. Principal component analysis is used to compress the data sets for classification visualization and for a comparison between the reduced and original feature spaces. Future directions for this research, both introduced here, include galaxy image enhancement by other methods and classification with a sparse dictionary.
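The PCA compression step described in the abstract can be sketched in a few lines via the singular value decomposition. The feature matrix below is hypothetical (six images, four morphological descriptors), not the paper's data set:

```python
import numpy as np

# Hypothetical feature matrix: 6 galaxy images x 4 morphological descriptors
# (values are illustrative only, not taken from the paper).
X = np.array([
    [2.1, 0.9, 3.3, 1.2],
    [1.9, 1.1, 3.1, 1.0],
    [6.0, 4.2, 0.5, 5.1],
    [5.8, 4.0, 0.7, 5.3],
    [2.2, 1.0, 3.2, 1.1],
    [6.1, 4.1, 0.6, 5.0],
])

# Centre the features, then project onto the top-2 principal axes via SVD.
Xc = X - X.mean(axis=0)
U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
X2 = Xc @ Vt[:2].T          # 2-D compressed feature space for visualization

# Fraction of total variance retained by the 2-D projection.
explained = (s[:2] ** 2).sum() / (s ** 2).sum()
```

Plotting `X2` gives the kind of 2-D classification visualization the abstract mentions, while `explained` quantifies how faithful the reduced feature space is to the original one.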
Abstract:
During critical situations, the precise digital processing of medical signals such as heartbeats is essential. Outside noise introduced into this data can lead to misinterpretation. Thus, it is important to be able to detect and correct the signal quickly and efficiently using digital filtering algorithms. The goal of filtering is to remove noise by correctly identifying the corrupted data points and replacing them with acceptable estimates of the original values. However, care must be taken throughout the filtering process not to also eliminate other important detail from the original signal. If the filtered output is to be analyzed post-filtering, say for feature recognition, both the structure and the details of the original clean signal must remain. This paper presents an original algorithm and two variations, all using the logical transform, that strive to do this accurately and with little computation. Using real heartbeat signals as test sets, the output is compared with that produced by median-type filters, and results are demonstrated over a variety of noise levels.
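As a point of reference, the median-type baseline the paper compares against can be sketched as a minimal 1-D median filter. The window size and the sample signal below are illustrative assumptions, not the paper's configuration:

```python
# Minimal 1-D median filter: each output sample is the median of a small
# window around the input sample, which suppresses isolated impulse noise
# while preserving ramps and edges better than a mean filter would.
def median_filter(signal, window=3):
    half = window // 2
    out = []
    for i in range(len(signal)):
        win = sorted(signal[max(0, i - half): i + half + 1])
        out.append(win[len(win) // 2])
    return out

# An impulse ("spike") at index 3 is removed; the slow ramp survives.
noisy = [1.0, 1.1, 1.2, 9.0, 1.4, 1.5, 1.6]
filtered = median_filter(noisy)
```

Note the trade-off the abstract alludes to: a plain median filter applied everywhere also smooths genuine fine detail, which is why detection-then-correction schemes like the paper's aim to modify only the corrupted samples.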
Abstract:
The Least Significant Bit (LSB) embedding technique is a well-known and broadly employed method in multimedia steganography, used mainly in applications involving single bit-plane manipulations in the spatial domain. The key advantages of LSB procedures are that they are simple to understand, easy to implement, offer high embedding capacity, and can be resistant to steganalysis attacks. Additionally, the LSB approach has spawned numerous applications and can serve as the basis of more complex techniques for multimedia data embedding. In the last several decades, hundreds of LSB or LSB-variant methods have been developed in an effort to optimize capacity while minimizing detectability, taking advantage of the method's overall simplicity. LSB-steganalysis research has also intensified in an effort to find new or improved ways to evaluate the performance of this widely used steganographic system. This paper reviews and categorizes the major techniques of LSB embedding, focusing specifically on the spatial domain. Justification for establishing, and promising uses of, a proposed SD-LSB-centric taxonomy is discussed. Specifically, we define a new taxonomy for SD-LSB embedding techniques with the goal of aiding researchers in tool-classification methodologies that can lead to advances in the state of the art in steganography. With a common framework to work with, researchers can more concretely identify core tools and common techniques and establish common standards of practice for steganography in general. Finally, we provide a summary of the most common LSB embedding techniques, followed by a proposed taxonomy standard for steganalysis.
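The basic spatial-domain LSB operation the survey is built around can be sketched directly. The cover pixels and message bits below are made up for illustration; the surveyed methods layer capacity and security refinements on top of this core idea:

```python
# Spatial-domain LSB embedding: hide one message bit in the least
# significant bit of each 8-bit pixel value.
def embed(pixels, bits):
    stego = list(pixels)
    for i, b in enumerate(bits):
        stego[i] = (stego[i] & ~1) | b   # clear the LSB, then set it to b
    return stego

def extract(pixels, n):
    # Recover the first n embedded bits by reading each pixel's LSB.
    return [p & 1 for p in pixels[:n]]

cover = [142, 87, 200, 31, 64, 255]
bits = [1, 0, 1, 1]
stego = embed(cover, bits)
assert extract(stego, 4) == bits
# Each pixel changes by at most one intensity level, hence low visibility:
assert all(abs(c - s) <= 1 for c, s in zip(cover, stego))
```

The at-most-one-level distortion is what gives plain LSB its imperceptibility, and the perfectly regular bit-plane statistics it leaves behind are what steganalysis techniques target.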
Abstract:
This paper introduces a new redundant number system, the adjunctive numerical relation (ANR) codes, which offers improvements over other well-known systems, such as the Fibonacci, Lucas, and prime number systems, when used in multimedia data-hiding applications. It is shown that this new redundant number system has potential applications in digital communications, signal processing, and image processing, and the paper offers two illustrative applications of the new coding system. The first is an enhanced bit-plane decomposition of image-formatted files with data embedding (steganography and watermarking); the second is an expanded bit-line decomposition of audio-formatted files with data embedding and index-based retrieval capability. Computer simulations detail the statistical stability required for effective data encoding and demonstrate the improvements in embedding capacity in multimedia carriers.
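The ANR codes themselves are defined in the paper and not reproduced here. As a point of comparison, the sketch below shows the greedy (Zeckendorf) representation in the Fibonacci number system, one of the comparator systems named above; such systems are "redundant" because a given integer generally admits several valid digit strings over the basis, which is the property data-hiding schemes exploit. All names and values here are illustrative:

```python
# Fibonacci basis 1, 2, 3, 5, 8, 13, ... up to (at least) n.
def fib_basis(n):
    fibs = [1, 2]
    while fibs[-1] < n:
        fibs.append(fibs[-1] + fibs[-2])
    return fibs

def zeckendorf(n):
    """Greedy canonical Fibonacci representation: no two adjacent 1 digits."""
    digits = []
    for f in reversed(fib_basis(n)):
        if f <= n:
            digits.append(1)
            n -= f
        else:
            digits.append(0)
    return digits  # most significant digit first
```

For example, `zeckendorf(10)` selects 8 + 2; the non-canonical alternative 5 + 3 + 2 for the same value illustrates the redundancy that embedding schemes built on such number systems rely on.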